AI Ethics: “When the robots do take over, at least they could be nice.”
Kriti Sharma (@sharma_kriti), AI technologist, business executive and humanitarian, gave the opening keynote at CILIP Conference 2019 with a five-point guide to keeping your future AI projects ethical.
After running through the many ethical problems AI has run into in areas ranging from face recognition to recruitment, Kriti said anyone looking at AI would do well to check:
1. Diversity: AI should reflect the diversity of its users. Is it working for everyone? Who is being left behind?
2. Accountability: Is AI being held to account? Don’t let AI do things you wouldn’t let a human do, whether that’s discriminating against women in a hiring process or making decisions, as self-driving cars do, without anyone being answerable for them.
3. Transparency: Show what’s working. AI systems can be very complicated, and we need to make them accessible. Users also need to know simple things, early on, such as whether they are talking to a human or a machine.
4. Positive impact: What are you using it for? One of the biggest uses of AI in the real world is helping people click on more ads, or driving digital addiction. Are you doing something better? Kriti is a founder of AI For Good.
5. Jobs: AI is going to replace some jobs and create new ones. We need to assess the impact and understand what effect it’s going to have.
On this last point Kriti gave an example of AI designed to help with customer support queries. It aimed to take over 80% of the mundane work, leaving humans to handle the 20% of complex problems. However, a survey showed that workers saw the change as: “My job used to be 80% easy, 20% difficult. Now it’s 100% difficult.”
Kriti recommended that employers look at upskilling and workplace policy alongside AI projects that might fundamentally change working practices.
In her talk Kriti also said that if we can get the ethics in place, “When the robots do take over, at least they could be nice!”